Add benchmarks for basic prebuilt MeterFilter implementations #6174

Open
wants to merge 1 commit into main

Conversation

etki commented Apr 25, 2025

What is this?

This PR contains only benchmarks for some basic API: MeterFilter implementations and a couple of Tags methods. There are no new user-visible symbols and no changes to production code. It goes a bit further by varying the inputs, so that different usage patterns get evaluated and "better at this count, yet worse at that count" situations have less chance of going unnoticed (assuming that whoever works on improving the performance of a specific piece of code runs these benchmarks before and after the changes).
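For orientation, here is a minimal sketch of the general shape of such a benchmark (class, field, and value names are illustrative, not the actual code in this PR): both the number of tags on the meter id and the number of ignored keys are JMH parameters, so the whole matrix of combinations gets measured.

```java
// Illustrative sketch only; the benchmarks in this PR may differ in naming
// and structure. Both input dimensions are @Param-driven so that every
// combination of "tags in identifier" x "ignored tags" is measured.
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.Tags;
import io.micrometer.core.instrument.config.MeterFilter;
import org.openjdk.jmh.annotations.*;

import java.util.concurrent.TimeUnit;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class IgnoreTagsBenchmarkSketch {

    @Param({"1", "8", "64"})
    int tagsInIdentifier;   // how many tags the meter id carries

    @Param({"1", "8", "64"})
    int ignoredTags;        // how many keys the filter is told to ignore

    MeterFilter filter;
    Meter.Id id;

    @Setup
    public void setUp() {
        String[] ignoredKeys = new String[ignoredTags];
        for (int i = 0; i < ignoredTags; i++) {
            ignoredKeys[i] = "key" + i;
        }
        filter = MeterFilter.ignoreTags(ignoredKeys);

        Tags tags = Tags.empty();
        for (int i = 0; i < tagsInIdentifier; i++) {
            tags = tags.and("key" + i, "value" + i);
        }
        id = new Meter.Id("benchmark.meter", tags, null, null, Meter.Type.COUNTER);
    }

    @Benchmark
    public Meter.Id baseline() {
        // map() returns an id with the ignored tag keys stripped
        return filter.map(id);
    }
}
```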

Is it really needed?

I'm prepping other PRs with some non-dramatic updates to the filters and tags code (again no new symbols or functionality, only changes to the existing implementation that preserve present behavior), so these benchmarks simply had to emerge. It's not up to me to decide whether they are needed in the project, but they came up naturally and they shouldn't take resources from anyone except the time of people who explicitly run them, so I don't see an obstacle here.

At the same time, I'm just having fun here, so I don't have a problem with any requested changes or removals.

There are already benchmarks for tags!

Yes, but: 1) I only noticed them while preparing this PR; 2) they use static input, which also comes with a fixed size. I don't have the energy to check the assembly, but the JIT could in theory draw overly specific conclusions from that; I also think a dynamic input size is a must here, since the whole sort-the-array story is non-linear from the very beginning (see the sketch below for the distinction). I'm not sure what to do with those existing benchmarks, but you can guide me.
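To make the concern concrete, here is a sketch of the difference; the benchmark and field names are made up, and this assumes the existing benchmarks pass a compile-time constant array into Tags.of.

```java
// Illustrative sketch: constant input with a fixed size vs. input built in
// @Setup with a parameterized size. Tags.of sorts and deduplicates, so its
// cost is not linear in the input size.
import io.micrometer.core.instrument.Tag;
import io.micrometer.core.instrument.Tags;
import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
public class TagsOfInputSketch {

    // A constant whose size and contents are visible to the JIT; measurements
    // on this may not generalize to differently shaped inputs.
    static final Tag[] STATIC_INPUT = { Tag.of("a", "1"), Tag.of("b", "2") };

    @Param({"2", "16", "128"})
    int size;

    Tag[] dynamicInput;

    @Setup
    public void setUp() {
        dynamicInput = new Tag[size];
        for (int i = 0; i < size; i++) {
            dynamicInput[i] = Tag.of("key" + i, "value" + i);
        }
    }

    @Benchmark
    public Tags staticInput() {
        return Tags.of(STATIC_INPUT);
    }

    @Benchmark
    public Tags dynamicallySizedInput() {
        return Tags.of(dynamicInput);
    }
}
```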

Aren't these benchmarks too sophisticated?

They are sophisticated, but not "too" sophisticated for my taste; they aren't strictly necessary, but since they come essentially for free and perform a more thorough matrix evaluation, I see that as a benefit. Then again, I'm just farming XP here, so if there are concerns about checking these in, I'm OK with that.

Any interesting results?

These are just baseline measurements, i.e. they exist so that two different revisions can be compared. Still, there are some hints at situations that could be improved. Selecting the biggest offender (measured on an Intel N100 @ 2 GHz) for dramatic effect:

| name | ignored tags | tags in identifier | result | unit |
| --- | --- | --- | --- | --- |
| config.filter.MeterFilterIgnoreTagsBenchmark.baseline | 64 | 64 | 8439.085 ± 7.119 | ns/op |
| config.filter.MeterFilterIgnoreTagsBenchmark.baseline:instructions:u | 64 | 64 | 38896.080 | #/op |

All that happens here is taking ~64 elements (there are many samples, most of which have 64 tags) and filtering out the ones that match a set of 64 keys. While it currently comes in at 8.4 µs, this could be an O(n) task executing in a couple of hundred nanoseconds. My forthcoming PRs will somewhat address things like this (to be honest, this is the biggest offender; the other places are not as impressive), but getting into the best possible position would require #6113.
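To illustrate the complexity argument (this is not the proposed implementation, just a hand-rolled sketch with hypothetical names): once the ignored keys sit in a hash set, stripping them from an id's tags is a single pass with an O(1) lookup per tag.

```java
// Rough illustration of the O(n) claim only; not the code proposed in the
// follow-up PRs.
import io.micrometer.core.instrument.Tag;
import io.micrometer.core.instrument.Tags;

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.StreamSupport;

final class IgnoreTagsSketch {

    private final Set<String> ignoredKeys;

    IgnoreTagsSketch(String... keys) {
        this.ignoredKeys = new HashSet<>(Arrays.asList(keys));
    }

    // One pass over the id's n tags, each checked against the set in O(1).
    Tags strip(Iterable<Tag> tags) {
        return Tags.of(StreamSupport.stream(tags.spliterator(), false)
                .filter(tag -> !ignoredKeys.contains(tag.getKey()))
                .collect(Collectors.toList()));
    }
}
```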

etki force-pushed the filter-benchmarks branch 4 times, most recently from e7a25c4 to bb73603 on April 25, 2025 at 03:19
Labels: performance (Issues related to general performance)